
    A Quantum-Proof Non-Malleable Extractor, With Application to Privacy Amplification against Active Quantum Adversaries

    In privacy amplification, two mutually trusted parties aim to amplify the secrecy of an initial shared secret $X$ in order to establish a shared private key $K$ by exchanging messages over an insecure communication channel. If the channel is authenticated, the task can be solved in a single round of communication using a strong randomness extractor; choosing a quantum-proof extractor allows one to establish security against quantum adversaries. In the case that the channel is not authenticated, Dodis and Wichs (STOC'09) showed that the problem can be solved in two rounds of communication using a non-malleable extractor, a stronger pseudo-random construction than a strong extractor. We give the first construction of a non-malleable extractor that is secure against quantum adversaries. The extractor is based on a construction by Li (FOCS'12) and is able to extract from sources of min-entropy rate larger than $1/2$. Combining this construction with a quantum-proof variant of the reduction of Dodis and Wichs, shown by Cohen and Vidick (unpublished), we obtain the first privacy amplification protocol secure against active quantum adversaries.
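
    A minimal sketch of the authenticated, one-round case the abstract refers to, assuming a Toeplitz-matrix 2-universal hash as the strong seeded extractor (a standard textbook construction, not the paper's non-malleable extractor):

```python
# Minimal sketch of one-round privacy amplification over an AUTHENTICATED
# channel, using a Toeplitz-matrix 2-universal hash as the strong seeded
# extractor. This is the standard construction, NOT the paper's
# non-malleable extractor (which targets the unauthenticated setting).
import secrets

def toeplitz_extract(x_bits, seed_bits, out_len):
    """Ext(x, seed): multiply x by the Toeplitz matrix defined by seed over GF(2)."""
    n = len(x_bits)
    assert len(seed_bits) == n + out_len - 1  # first column + first row
    key = []
    for i in range(out_len):
        # T[i][j] = seed_bits[i - j + n - 1]; each row is a shift of the seed.
        bit = 0
        for j in range(n):
            bit ^= seed_bits[i - j + n - 1] & x_bits[j]
        key.append(bit)
    return key

n, m = 64, 16                                           # source length, key length
x = [secrets.randbits(1) for _ in range(n)]             # shared weak secret X
seed = [secrets.randbits(1) for _ in range(n + m - 1)]  # Alice's fresh public seed
# Alice sends `seed` over the authenticated channel; both sides compute:
k_alice = toeplitz_extract(x, seed, m)
k_bob   = toeplitz_extract(x, seed, m)
assert k_alice == k_bob
```

    In the unauthenticated setting an active adversary could tamper with the seed in transit, which is exactly the attack the non-malleable extractor and the two-round Dodis-Wichs reduction are designed to survive.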

    Modulus Computational Entropy

    The so-called {\em leakage-chain rule} is a very important tool used in many security proofs. It gives an upper bound on the entropy loss of a random variable $X$ in the case where the adversary, having already learned some random variables $Z_{1},\ldots,Z_{\ell}$ correlated with $X$, obtains some further information $Z_{\ell+1}$ about $X$. Analogously to the information-theoretic case, one might expect that also for the \emph{computational} variants of entropy the loss depends only on the actual leakage, i.e. on $Z_{\ell+1}$. Surprisingly, Krenn et al.\ have shown recently that for the most commonly used definitions of computational entropy this holds only if the computational quality of the entropy deteriorates exponentially in $|(Z_{1},\ldots,Z_{\ell})|$. This means that the current standard definitions of computational entropy do not allow one to fully capture leakage that occurred "in the past", which severely limits the applicability of this notion. As a remedy for this problem we propose a slightly stronger definition of computational entropy, which we call the \emph{modulus computational entropy}, and use it as a technical tool that allows us to prove a desired chain rule that depends only on the actual leakage and not on its history. Moreover, we show that the modulus computational entropy unifies other, sometimes seemingly unrelated, notions already studied in the literature in the context of information leakage and chain rules. Our results indicate that the modulus entropy is, up to now, the weakest restriction that guarantees that the chain rule for the computational entropy works. As an example of application we demonstrate a few interesting cases where our restricted definition is fulfilled and the chain rule holds.
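
    For contrast, the information-theoretic chain rule for average min-entropy that the abstract alludes to can be stated as follows (a standard fact, reproduced here for orientation, not taken from the paper):

```latex
% Leakage chain rule for average (conditional) min-entropy: learning an
% additional leakage Z_{l+1} of bit-length |Z_{l+1}| costs at most
% |Z_{l+1}| bits, independently of the previously leaked Z_1, ..., Z_l.
\[
  \widetilde{H}_\infty\!\left(X \mid Z_1,\ldots,Z_\ell,Z_{\ell+1}\right)
  \;\ge\;
  \widetilde{H}_\infty\!\left(X \mid Z_1,\ldots,Z_\ell\right) - |Z_{\ell+1}|.
\]
% The paper's point is that the computational analogue of this inequality
% fails for the standard definitions unless one pays an exponential price
% in |(Z_1, ..., Z_l)|.
```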

    When Can Limited Randomness Be Used in Repeated Games?

    The central result of classical game theory states that every finite normal-form game has a Nash equilibrium, provided that players are allowed to use randomized (mixed) strategies. However, in practice, humans are known to be bad at generating random-like sequences, and true random bits may be unavailable. Even if the players have access to enough random bits for a single instance of the game, their randomness might be insufficient if the game is played many times. In this work, we ask whether randomness is necessary for equilibria to exist in finitely repeated games. We show that for a large class of games containing arbitrary two-player zero-sum games, approximate Nash equilibria of the $n$-stage repeated version of the game exist if and only if both players have $\Omega(n)$ random bits. In contrast, we show that there exists a class of games for which no equilibrium exists in pure strategies, yet the $n$-stage repeated version of the game has an exact Nash equilibrium in which each player uses only a constant number of random bits. When the players are assumed to be computationally bounded, if cryptographic pseudorandom generators (or, equivalently, one-way functions) exist, then the players can base their strategies on "random-like" sequences derived from only a small number of truly random bits. We show that, in contrast, in repeated two-player zero-sum games, if pseudorandom generators \emph{do not} exist, then $\Omega(n)$ random bits remain necessary for equilibria to exist.
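
    The necessity direction is intuitive: a player whose whole strategy is stretched from too few random bits becomes predictable. The toy simulation below (illustrative only; the construction and constants are not from the paper) plays $n$-stage matching pennies against a player whose moves are expanded from $k$ random bits, and shows the opponent exploiting her by seed elimination:

```python
# Toy illustration (not the paper's construction): in n-stage repeated
# matching pennies, a player whose entire play is derived from k random
# bits has at most 2^k possible move sequences, so an observing opponent
# can eliminate inconsistent seeds and eventually predict every move.
import random, hashlib

def expand(seed, n):
    """Deterministically expand a k-bit seed into n moves (a stand-in expander)."""
    return [hashlib.sha256(f"{seed}:{t}".encode()).digest()[0] & 1
            for t in range(n)]

k, n = 4, 200
table = {s: expand(s, n) for s in range(2 ** k)}  # Bob knows the expansion map
alice_seed = random.randrange(2 ** k)             # Alice's only true randomness
alice_moves = table[alice_seed]

consistent = set(table)            # seeds still consistent with observed play
bob_payoff = 0
for t in range(n):
    # Bob predicts Alice's move by majority vote over the consistent seeds.
    votes = sum(table[s][t] for s in consistent)
    guess = 1 if 2 * votes >= len(consistent) else 0
    bob_payoff += 1 if guess == alice_moves[t] else -1   # matcher wins
    consistent = {s for s in consistent if table[s][t] == alice_moves[t]}

print(f"Bob's average payoff: {bob_payoff / n:+.2f} (equilibrium value is 0)")
```

    With $k = 4$ and $n = 200$, Bob pins down Alice's seed within a handful of rounds and wins almost every remaining stage, so Alice's long-run payoff falls far below the equilibrium value of 0.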

    Attacking PUF-Based Pattern Matching Key Generators via Helper Data Manipulation

    Physically Unclonable Functions (PUFs) provide a unique signature for integrated circuits (ICs), similar to a fingerprint for humans. They are primarily used to generate secret keys, thereby exploiting the unique manufacturing variations of an IC. Unfortunately, PUF output bits are not perfectly reproducible and are non-uniformly distributed. To obtain a high-quality key, one needs to implement additional post-processing logic on the same IC. Fuzzy extractors are the well-established standard solution. Pattern Matching Key Generators (PMKGs) have been proposed as an alternative. In this work, we demonstrate the latter construction to be vulnerable to manipulation of its public helper data. Full key recovery is possible, although its difficulty depends on system design choices. We demonstrate our attacks using a 4-XOR arbiter PUF manufactured in 65 nm CMOS technology. We also propose a simple but effective countermeasure.
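
    The paper's attack targets PMKGs specifically; as generic intuition for why public helper data must be authenticated, here is a sketch of helper-data manipulation against a naive code-offset fuzzy extractor with a 5x repetition code (an assumed, illustrative construction, not the PMKG scheme):

```python
# Illustrative sketch (NOT the paper's PMKG attack): a naive code-offset
# fuzzy extractor with a 5x repetition code. Flipping one full repetition
# block in the public helper data flips the corresponding key bit on the
# next reconstruction; observing whether the device still functions then
# leaks that key bit. Real designs must authenticate the helper data.
import secrets

REP = 5  # repetition factor

def gen(puf_response):
    """Enrollment: helper = PUF response XOR codeword(key)."""
    key = [secrets.randbits(1) for _ in range(len(puf_response) // REP)]
    codeword = [b for b in key for _ in range(REP)]
    helper = [p ^ c for p, c in zip(puf_response, codeword)]
    return key, helper

def rep(puf_response_noisy, helper):
    """Reconstruction: majority-decode (noisy PUF response XOR helper)."""
    word = [p ^ h for p, h in zip(puf_response_noisy, helper)]
    return [1 if sum(word[i:i + REP]) > REP // 2 else 0
            for i in range(0, len(word), REP)]

puf = [secrets.randbits(1) for _ in range(40)]
key, helper = gen(puf)
assert rep(puf, helper) == key          # honest reconstruction succeeds

# Attacker flips one whole repetition block of the PUBLIC helper data:
tampered = helper[:]
for i in range(REP):
    tampered[i] ^= 1
key2 = rep(puf, tampered)
assert key2[0] == key[0] ^ 1 and key2[1:] == key[1:]  # exactly bit 0 flipped
```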

    On the Communication Complexity of Secure Computation

    Information-theoretically secure multi-party computation (MPC) is a central primitive of modern cryptography. However, relatively little is known about the communication complexity of this primitive. In this work, we develop powerful information-theoretic tools to prove lower bounds on the communication complexity of MPC. We restrict ourselves to a 3-party setting in order to bring out the power of these tools without introducing too many complications. Our techniques include the use of a data processing inequality for residual information, i.e. the gap between mutual information and Gács-Körner common information, a new information inequality for 3-party protocols, and the idea of distribution switching, by which lower bounds computed under certain worst-case scenarios can be shown to apply in the general case. Using these techniques we obtain tight bounds on the communication complexity of MPC protocols for various interesting functions. In particular, we show concrete functions that have "communication-ideal" protocols, which achieve the minimum communication simultaneously on all links in the network. Also, we obtain the first explicit example of a function that incurs a higher communication cost than the input length in the secure computation model of Feige, Kilian and Naor (1994), who had shown that such functions exist. We also show that our communication bounds imply tight lower bounds on the amount of randomness required by MPC protocols for many interesting functions.
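
    Residual information can be computed exactly for small joint distributions: the Gács-Körner common information of $(X,Y)$ is the entropy of the "common part", i.e. of the connected component of $(x,y)$ in the bipartite graph of the joint support. A sketch using these standard definitions (the code and example are ours, not the paper's):

```python
# Sketch (standard definitions, not the paper's proofs): residual
# information RI(X;Y) = I(X;Y) - C_GK(X;Y), where the Gacs-Korner common
# information is H(Q) for the common part Q = connected component of
# (x, y) in the bipartite graph of the joint support.
from math import log2
from collections import defaultdict

def components(pxy):
    """Union-find over the support's bipartite graph; maps (x,y) -> component id."""
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    def union(a, b):
        parent.setdefault(a, a); parent.setdefault(b, b)
        parent[find(a)] = find(b)
    for (x, y), p in pxy.items():
        if p > 0:
            union(('x', x), ('y', y))
    return {(x, y): find(('x', x)) for (x, y), p in pxy.items() if p > 0}

def H(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def residual_information(pxy):
    px, py, pq = defaultdict(float), defaultdict(float), defaultdict(float)
    comp = components(pxy)
    for (x, y), p in pxy.items():
        if p > 0:
            px[x] += p; py[y] += p; pq[comp[(x, y)]] += p
    mi = H(px) + H(py) - H(pxy)
    return mi - H(pq)  # I(X;Y) minus Gacs-Korner common information

# Example: X uniform on {0,1,2,3}, Y = X with the low bit erased.
pxy = {(x, x >> 1): 0.25 for x in range(4)}
print(residual_information(pxy))  # I = 1 bit, C_GK = 1 bit -> RI = 0
```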

    Simulating Auxiliary Inputs, Revisited

    For any pair $(X,Z)$ of correlated random variables we can think of $Z$ as a randomized function of $X$. Provided that $Z$ is short, one can make this function computationally efficient by allowing it to be only approximately correct. In folklore this problem is known as \emph{simulating auxiliary inputs}. This idea of simulating auxiliary information turns out to be a powerful tool in computer science, finding applications in complexity theory, cryptography, pseudorandomness and zero-knowledge. In this paper we revisit this problem, achieving the following results: (a) we discuss and compare the efficiency of known results, finding a flaw in the best known bound claimed in the TCC'14 paper "How to Fake Auxiliary Inputs"; (b) we present a novel boosting algorithm for constructing the simulator, and our technique essentially fixes the flaw; this boosting proof is of independent interest, as it shows how to handle "negative mass" issues when constructing probability measures in descent algorithms; (c) our bounds are much better than bounds known so far: to make the simulator $(s,\epsilon)$-indistinguishable we need complexity $O\left(s\cdot 2^{5\ell}\epsilon^{-2}\right)$ in time/circuit size, which is better by a factor of $\epsilon^{-2}$ compared to previous bounds. In particular, with our technique we (finally) get meaningful provable security for the EUROCRYPT'09 leakage-resilient stream cipher instantiated with a standard 256-bit block cipher such as $\mathsf{AES256}$.

    Quantitative information flow under generic leakage functions and adaptive adversaries

    We put forward a model of action-based randomization mechanisms to analyse quantitative information flow (QIF) under generic leakage functions and under possibly adaptive adversaries. This model subsumes many of the QIF models proposed so far. Our main contributions include the following: (1) we identify mild general conditions on the leakage function under which it is possible to derive general and significant results on adaptive QIF; (2) we contrast the efficiency of adaptive and non-adaptive strategies, showing that the latter are as efficient as the former in terms of length, up to an expansion factor bounded by the number of available actions; (3) we show that the maximum information leakage over strategies, given a finite time horizon, can be expressed in terms of a Bellman equation. This can be used to compute an optimal finite strategy recursively, by resorting to standard methods like backward induction.
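
    A toy version of contribution (3), under an assumed model: the secret is uniform over a small set, each action applies a hypothetical deterministic leakage channel, and leakage is measured by vulnerability (the adversary's guessing probability). The Bellman recursion below computes the optimal adaptive strategy's value by backward induction:

```python
# Toy sketch of the Bellman-style recursion the abstract mentions
# (assumed model, not the paper's): the adversary adaptively picks one
# of several channels per step; the leakage measure here is the
# vulnerability V(p) = max_x p(x), i.e. the guessing probability.
from functools import lru_cache

X = range(4)                          # secret values, uniform prior
# Hypothetical actions: each action maps a secret x to an observation.
ACTIONS = {
    'low_bit':  lambda x: x & 1,
    'high_bit': lambda x: x >> 1,
}

@lru_cache(maxsize=None)
def value(posterior, horizon):
    """Max expected vulnerability achievable with `horizon` adaptive queries."""
    if horizon == 0:
        return max(posterior)
    best = 0.0
    for act in ACTIONS.values():
        # Split the posterior by observation y, then recurse (Bellman step).
        branches = {}
        for x, p in enumerate(posterior):
            branches.setdefault(act(x), [0.0] * len(posterior))[x] += p
        ev = 0.0
        for branch in branches.values():
            py = sum(branch)
            if py > 0:
                ev += py * value(tuple(p / py for p in branch), horizon - 1)
        best = max(best, ev)
    return best

prior = tuple(1 / len(X) for _ in X)
for t in range(3):
    print(f"horizon {t}: max expected vulnerability = {value(prior, t):.3f}")
# 0 queries -> 0.25; 1 query -> 0.5; 2 adaptive queries -> 1.0
```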

    Weak randomness completely trounces the security of QKD

    In the usual security proofs of quantum protocols, the adversary (Eve) is expected to have full control over any quantum communication between the communicating parties (Alice and Bob). Eve is also expected to have full access to an authenticated classical channel between Alice and Bob. Unconditional security against any attack by Eve can be proved even in the realistic setting of device and channel imperfection. In this Letter we show that the security of QKD protocols is ruined if one allows Eve even very limited access to the random sources used by Alice. Such access should always be expected under realistic experimental conditions, via various side channels.

    Approximating open quantum system dynamics in a controlled and efficient way: A microscopic approach to decoherence

    We demonstrate that the dynamics of an open quantum system can be calculated efficiently and with predefined error, provided a basis exists in which the system-environment interactions are local and hence obey the Lieb-Robinson bound. We show that this assumption can generally be made. Defining a dynamical renormalization group transformation, we obtain an effective Hamiltonian for the full system plus environment that comprises only those environmental degrees of freedom that are within the effective light cone of the system. The reduced system dynamics can therefore be simulated with a computational effort that scales at most polynomially in the interaction time and the size of the effective light cone. Our results hold for generic environments consisting of either discrete or continuous degrees of freedom.
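
    For reference, the Lieb-Robinson bound invoked above is standardly stated as follows (a known result, quoted for context rather than taken from this paper):

```latex
% Standard form of the Lieb-Robinson bound for local Hamiltonians:
% commutators of spatially separated observables are exponentially small
% outside an effective light cone of velocity v.
\[
  \left\| \left[ A(t),\, B \right] \right\|
  \;\le\;
  C \,\|A\|\,\|B\|\, e^{-\mu \left( d(X,Y) - v\,|t| \right)},
\]
% where A is supported on region X, B on region Y, d(X,Y) is the distance
% between the regions, and C, mu, v depend only on the interaction structure.
```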
